
    Optimal random perturbations for stochastic approximation using a simultaneous perturbation gradient approximation

    The simultaneous perturbation stochastic approximation (SPSA) algorithm has recently attracted considerable attention for optimization problems where it is difficult or impossible to obtain a direct gradient of the objective (say, loss) function. The approach relies on a highly efficient simultaneous perturbation approximation to the gradient constructed from loss function measurements. SPSA picks a simultaneous perturbation (random) vector in a Monte Carlo fashion as part of generating the approximation to the gradient. This paper derives the optimal distribution for the Monte Carlo process. The objective is to minimize the mean square error of the estimate. We also consider maximization of the likelihood that the estimate is confined within a bounded symmetric region of the true parameter. The optimal distribution for the components of the simultaneous perturbation vector is found to be a symmetric Bernoulli in both cases. We end the paper with a numerical study related to the area of experiment design.
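
    As a concrete illustration of the gradient approximation described above, the following minimal sketch forms one SPSA gradient estimate from two loss measurements, using the symmetric Bernoulli (±1) perturbation components the abstract identifies as optimal. The function and parameter names (`loss`, `theta`, perturbation size `c`) are illustrative assumptions.

```python
import numpy as np

def spsa_gradient(loss, theta, c=0.1, rng=None):
    """One SPSA gradient estimate from two loss measurements (sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    # Symmetric Bernoulli (+/-1) perturbation components.
    delta = rng.choice([-1.0, 1.0], size=theta.shape)
    y_plus = loss(theta + c * delta)
    y_minus = loss(theta - c * delta)
    # All coordinates share the same two measurements; only the sign
    # pattern of delta differs from component to component.
    return (y_plus - y_minus) / (2.0 * c * delta)
```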

    Intrinsic adaptation in autonomous recurrent neural networks

    A massively recurrent neural network responds, on the one hand, to input stimuli and is, on the other hand, autonomously active in the absence of sensory inputs. Stimulus and information processing depend crucially on the qualia of the autonomous-state dynamics of the ongoing neural activity. This default neural activity may be dynamically structured in time and space, showing regular, synchronized, bursting, or chaotic activity patterns. We study the influence of non-synaptic plasticity on the default dynamical state of recurrent neural networks. The non-synaptic adaptation considered acts on intrinsic neural parameters, such as the threshold and the gain, and is driven by the optimization of the information entropy. In the presence of the intrinsic adaptation processes, we observe three distinct and globally attracting dynamical regimes: a regular synchronized, an overall chaotic, and an intermittent bursting regime. The intermittent bursting regime is characterized by intervals of regular flow, which are quite insensitive to external stimuli, interspersed with chaotic bursts that respond sensitively to input signals. We discuss these findings in the context of self-organized information processing and critical brain dynamics. Comment: 24 pages, 8 figures
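
    To make the entropy-driven adaptation of gain and threshold concrete, here is a minimal sketch of one widely used intrinsic-plasticity rule of this kind: a Triesch-style gradient step that pushes a sigmoidal neuron's output distribution toward an exponential target with mean `mu`. The variable names, target mean, and learning rate are illustrative assumptions, not necessarily the exact rule used in the paper.

```python
import numpy as np

def intrinsic_update(x, a, b, mu=0.2, eta=1e-3):
    """One intrinsic-plasticity step for a sigmoidal neuron (sketch).

    Adapts gain `a` and threshold `b` so that the output distribution
    approaches an exponential with mean `mu`, a common proxy for
    entropy-maximizing adaptation of intrinsic parameters.
    """
    y = 1.0 / (1.0 + np.exp(-(a * x + b)))        # neuron output
    common = 1.0 - (2.0 + 1.0 / mu) * y + y**2 / mu
    b_new = b + eta * common                       # threshold update
    a_new = a + eta * (1.0 / a + x * common)       # gain update
    return a_new, b_new, y
```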

    Optimal sensor configuration for complex systems with application to signal detection in structures


    Optimal sensor configuration for complex systems


    Permutation-invariant distance between atomic configurations

    We present a permutation-invariant distance between atomic configurations, defined through a functional representation of atomic positions. This distance makes it possible to directly compare different atomic environments with an arbitrary number of particles, without going through a space of reduced dimensionality (i.e., fingerprints) as an intermediate step. Moreover, this distance is naturally invariant under permutations of atoms, avoiding the time-consuming minimization over atom permutations required by other common criteria (such as the root mean square distance). Finally, invariance under global rotations is accounted for by a minimization procedure in the space of rotations, solved by Monte Carlo simulated annealing. A formal framework is also introduced, showing that the proposed distance satisfies the properties of a metric on the space of atomic configurations. Two example applications are proposed. The first consists in evaluating the faithfulness of some fingerprints (or descriptors), i.e., their capacity to represent the structural information of a configuration. The second concerns structural analysis, where our distance proves efficient in discriminating different local structures and even classifying their degree of similarity.
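
    The following sketch illustrates one common functional representation of this kind: each configuration is mapped to a sum of Gaussians of width `sigma` centred on its atomic positions, and two configurations are compared through the L2 norm of the density difference, which has a closed form. The specific functional form and the width parameter are assumptions for illustration, and the rotational-alignment step mentioned in the abstract is omitted.

```python
import numpy as np

def density_overlap(pos_a, pos_b, sigma=0.5):
    """Overlap of two sums of Gaussians of width sigma centred on the
    atomic positions (closed form, up to a constant prefactor)."""
    diff = pos_a[:, None, :] - pos_b[None, :, :]   # pairwise displacements
    d2 = np.sum(diff**2, axis=-1)
    return np.sum(np.exp(-d2 / (4.0 * sigma**2)))

def configuration_distance(pos_a, pos_b, sigma=0.5):
    """Permutation-invariant distance: L2 norm of the difference of the
    two Gaussian densities (invariant because all atom pairs are summed)."""
    d2 = (density_overlap(pos_a, pos_a, sigma)
          + density_overlap(pos_b, pos_b, sigma)
          - 2.0 * density_overlap(pos_a, pos_b, sigma))
    return np.sqrt(max(d2, 0.0))
```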

    Variational quantum Monte Carlo simulations with tensor-network states

    We show that the formalism of tensor-network states, such as matrix product states (MPS), can be used as a basis for variational quantum Monte Carlo simulations. Using a stochastic optimization method, we demonstrate the potential of this approach by explicit MPS calculations for the transverse Ising chain with up to N=256 spins at criticality, using periodic boundary conditions and D×D matrices with D up to 48. The computational cost of our scheme formally scales as ND^3, whereas standard MPS approaches and the related density matrix renormalization group method scale as ND^5 and ND^6, respectively, for periodic systems. Comment: 4+ pages, 2 figures. v2: improved data, comparisons with exact results, to appear in Phys. Rev. Lett.
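
    The ND^3 scaling quoted above comes from evaluating one MPS amplitude per sampled spin configuration, which costs one D×D matrix product per site. A minimal sketch of that amplitude evaluation for a periodic MPS (variable names are illustrative):

```python
import numpy as np

def mps_amplitude(A, config):
    """Amplitude <s|psi> of a spin configuration for a periodic MPS (sketch).

    A:      array of shape (2, D, D), one D x D matrix per local spin state
    config: sequence of local states (0 or 1) of length N
    Cost is O(N D^3): one D x D matrix product per site, repeated for every
    configuration visited during the Monte Carlo sampling.
    """
    D = A.shape[1]
    M = np.eye(D)
    for s in config:
        M = M @ A[s]
    return np.trace(M)  # trace closes the ring (periodic boundary conditions)
```

    In a Metropolis scheme, the squared ratio of two such amplitudes would serve as the acceptance weight for a proposed spin update.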

    Well-Tempered Metadynamics: A Smoothly Converging and Tunable Free-Energy Method

    We present a method for determining the dependence of the free energy on a selected number of collective variables using an adaptive bias. The formalism provides a unified description that has metadynamics and canonical sampling as limiting cases. Convergence and errors can be rigorously and easily controlled. The parameters of the simulation can be tuned so as to focus the computational effort only on the physically relevant regions of the order-parameter space. The algorithm is tested on the reconstruction of the alanine dipeptide free-energy landscape.
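
    A minimal sketch of the kind of adaptive-bias update used in well-tempered metadynamics, for a one-dimensional collective variable discretized on a grid; the hill height `w0`, width `sigma`, and the well-tempered parameter `kB_dT` (k_B times Delta T) are assumed, illustrative values. The two limits named in the abstract correspond to Delta T -> infinity (ordinary metadynamics) and Delta T -> 0 (no bias, i.e., canonical sampling).

```python
import numpy as np

def deposit_hill(grid, bias, s_t, w0=1.2, sigma=0.1, kB_dT=2.5):
    """One well-tempered metadynamics update of the bias V(s) on a grid.

    grid: grid of collective-variable values; bias: current V(s) on the grid;
    s_t:  value of the collective variable visited at this deposition step.
    """
    V_s = np.interp(s_t, grid, bias)            # current bias at s_t
    w = w0 * np.exp(-V_s / kB_dT)               # tempered hill height:
                                                # smaller where the bias is large
    bias += w * np.exp(-(grid - s_t)**2 / (2.0 * sigma**2))
    return bias
```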

    Variational ground states of 2D antiferromagnets in the valence bond basis

    We study a variational wave function for the ground state of the two-dimensional S=1/2 Heisenberg antiferromagnet in the valence bond basis. The expansion coefficients are products of amplitudes h(x,y) for valence bonds connecting spins separated by (x,y) lattice spacings. In contrast to previous studies, in which a functional form for h(x,y) was assumed, we here optimize all the amplitudes for lattices with up to 32×32 spins. We use two different schemes for optimizing the amplitudes: a Newton/conjugate-gradient method and a stochastic method which requires only the signs of the first derivatives of the energy. The latter method performs significantly better. The energy for large systems deviates by only approximately 0.06% from its exact value (calculated using unbiased quantum Monte Carlo simulations). The spin correlations are also well reproduced, falling approximately 2% below the exact ones at long distances. The amplitudes h(r) for valence bonds of long length r decay as 1/r^3. We also discuss some results for small frustrated lattices. Comment: v2: 8 pages, 5 figures, significantly expanded, new optimization method, improved results
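
    A minimal sketch of a sign-only stochastic update of the type described above: each bond amplitude is nudged by a small random multiplicative factor in the direction opposite the estimated sign of its energy derivative. The step-size scheme and variable names are illustrative assumptions, not the paper's exact prescription.

```python
import numpy as np

def sign_only_step(h, dE_sign, delta=0.05, rng=None):
    """Update valence-bond amplitudes using only the signs of dE/dh (sketch).

    h:       array of bond amplitudes h(x, y), flattened
    dE_sign: estimated sign (+1, -1, or 0) of the energy derivative with
             respect to each amplitude, e.g. from a Monte Carlo estimate
    Each amplitude is multiplied by a random factor slightly above or below
    one, chosen so the move opposes the sign of the derivative.
    """
    rng = np.random.default_rng() if rng is None else rng
    factors = 1.0 - delta * rng.random(h.shape) * dE_sign
    return h * factors
```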

    Method of measurements with random perturbation: Application in photoemission experiments

    We report an application of a simultaneous perturbation stochastic approximation (SPSA) algorithm to filtering systematic noise (SN) with a non-zero mean value from photoemission data. In our analysis we used a series of 50 single-scan photoemission spectra of a W(110) surface to which randomly chosen SN was added. The SPSA-evaluated spectrum was found to be in good agreement with the spectrum measured without SN. On the basis of our results, wide application of SPSA to the evaluation of experimental data is anticipated. Comment: 11 pages, 3 figures